27 research outputs found

    Voting, Deliberation and Truth

    There are various ways to reach a group decision on a factual yes-no question. One way is to vote and let the majority decide. This procedure receives some epistemological support from the Condorcet Jury Theorem. Alternatively, the group members may prefer to deliberate until they reach a decision that everybody endorses - a consensus. While the latter procedure has the advantage that it makes everybody happy (as everybody endorses the consensus), it has the disadvantage that it is difficult to implement, especially for larger groups. Besides, the resulting consensus may be far from the truth. And so we ask: Is deliberation truth-conducive in the sense that majority voting is? To address this question, we construct a highly idealized model of a particular deliberation process, inspired by the movie Twelve Angry Men, and show that the answer is "yes": deliberation procedures can be truth-conducive just as the voting procedure is. We then explore, again on the basis of our model and using agent-based simulations, under which conditions it is epistemically better to deliberate than to vote. Our analysis shows that there are contexts in which deliberation is epistemically preferable, and we provide reasons why this is so.
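
    The Condorcet Jury Theorem referred to above says that if each voter independently answers a binary question correctly with probability above 1/2, the probability that the majority answers correctly grows with group size. A minimal Monte Carlo sketch (not the paper's model; the competence value and group sizes are purely illustrative):

```python
import random

def majority_correct_rate(n_voters, competence, trials=20000, seed=0):
    """Estimate the probability that a simple majority of independent
    voters, each correct with probability `competence`, gets a binary
    question right (Monte Carlo estimate)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        correct = sum(rng.random() < competence for _ in range(n_voters))
        if correct > n_voters / 2:
            hits += 1
    return hits / trials

# With individual competence only slightly above 0.5, the majority of a
# larger group is markedly more reliable than any single voter.
for n in (1, 11, 101):
    print(n, round(majority_correct_rate(n, 0.6), 3))
```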

    Anchoring in Deliberations

    Deliberation is a standard procedure for making decisions in not-too-large groups. It has the advantage that the group members can learn from each other and that, at the end, a consensus that everybody endorses often emerges. But a deliberation procedure also has a number of disadvantages. For example, which consensus is reached usually depends on the order in which the different group members speak. More specifically, the group member who speaks first often has a disproportionately high impact on the final decision: she anchors the deliberation process. While the anchoring effect undoubtedly appears in real deliberating groups, we ask whether it also appears in groups whose members are truth-seeking and rational in the sense that they properly take into account the information provided by their fellow group members, updating their beliefs according to plausible rules. To answer this question, and to make some progress towards explaining the anchoring effect, a formal model is constructed and analyzed. Using this model, we study the anchoring effect in homogeneous groups (i.e. groups whose members consider each other equally reliable), for which we provide analytical results, and in inhomogeneous groups, for which we provide simulation results.

    Anchoring in Deliberations

    Deliberation is a standard procedure for making decisions in not-too-large groups. It has the advantage that group members can learn from each other and that, at the end, a consensus that everybody endorses often emerges. Unfortunately, however, implementing a deliberation procedure also has a number of disadvantages due to the cognitive limitations of the individual group members. What is more, the very process of deliberation introduces an additional bias, which we investigate in this article. We demonstrate that even in a group of (boundedly) rational agents the resulting consensus (if there is one) depends on the order in which the group members speak. More specifically, the group member who speaks first has a disproportionately high impact on the final decision, which we interpret as a new instance of the well-known anchoring effect. To show this, we construct and analyze an agent-based model -- inspired by the disagreement debate in social epistemology -- and obtain analytical results for homogeneous groups (i.e. for groups whose members consider each other epistemic peers) as well as simulation results for inhomogeneous groups.
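
    The order dependence can be illustrated with a toy round-robin updating rule (a simple weighted-averaging sketch in the spirit of Lehrer-Wagner-style models, not the authors' actual model; the weight and initial credences are illustrative):

```python
def deliberate(beliefs, order, weight=0.5, rounds=20):
    """Round-robin deliberation: each speaker in turn announces their
    credence, and every listener moves a fraction `weight` toward it."""
    b = list(beliefs)
    for _ in range(rounds):
        for speaker in order:
            for i in range(len(b)):
                if i != speaker:
                    b[i] += weight * (b[speaker] - b[i])
    return b

start = [0.2, 0.5, 0.8]
# Same group, same initial credences -- but the consensus the group settles
# on depends on who speaks first: the first speaker anchors the outcome.
print([round(x, 3) for x in deliberate(start, order=[0, 1, 2])])
print([round(x, 3) for x in deliberate(start, order=[2, 1, 0])])
```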

    Probabilities with Gaps and Gluts

    Belnap-Dunn logic (BD), sometimes also known as First Degree Entailment, is a four-valued propositional logic that complements the classical truth values True and False with two non-classical truth values, Neither and Both. The latter two account for the possibility that the available information is incomplete or provides contradictory evidence. In this paper, we present a probabilistic extension of BD that permits agents to have probabilistic beliefs about the truth and falsity of a proposition. We provide a sound and complete axiomatization for the framework and identify policies for conditionalization and aggregation. Concretely, we introduce four-valued equivalents of Bayes' and Jeffrey updating, and suggest mechanisms for aggregating information from different sources.
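
    One common way to picture such non-classical probabilities (a sketch of a standard four-valued representation, not necessarily the paper's exact axiomatization) is to distribute probability mass over the four Belnap-Dunn values; the support for and against a proposition can then sum to less than 1 (a gap) or more than 1 (a glut):

```python
from dataclasses import dataclass

@dataclass
class BDBelief:
    """Probability mass over the four Belnap-Dunn truth values."""
    true: float
    false: float
    both: float
    neither: float

    def __post_init__(self):
        total = self.true + self.false + self.both + self.neither
        assert abs(total - 1.0) < 1e-9, "masses must sum to 1"

    def support_for(self):      # belief that the proposition is true
        return self.true + self.both

    def support_against(self):  # belief that the proposition is false
        return self.false + self.both

# A glut: contradictory evidence, so support for and against exceeds 1.
glut = BDBelief(true=0.4, false=0.3, both=0.2, neither=0.1)
print(round(glut.support_for(), 2), round(glut.support_against(), 2))

# A gap: incomplete information, so the two supports fall short of 1.
gap = BDBelief(true=0.3, false=0.2, both=0.0, neither=0.5)
print(round(gap.support_for() + gap.support_against(), 2))
```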

    Learning Probabilities: Towards a Logic of Statistical Learning

    We propose a new model for forming beliefs and learning about unknown probabilities (such as the probability of picking a red marble from a bag with an unknown distribution of coloured marbles). The most widespread model for such situations of 'radical uncertainty' is in terms of imprecise probabilities, i.e. representing the agent's knowledge as a set of probability measures. We add to this model a plausibility map, associating with each measure a plausibility number, as a way to go beyond what is known with certainty and to represent the agent's beliefs about probability. There are a number of standard examples: Shannon Entropy, Centre of Mass, etc. We then consider the learning of two types of information: (1) learning by repeated sampling from the unknown distribution (e.g. picking marbles from the bag); and (2) learning higher-order information about the distribution (in the shape of linear inequalities, e.g. we are told there are more red marbles than green marbles). The first changes only the plausibility map (via a 'plausibilistic' version of Bayes' Rule) but leaves the given set of measures unchanged; the second shrinks the set of measures without changing their plausibility. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. But our belief change does not comply with the standard AGM axioms, since the revision induced by (1) is of a non-AGM type. This is essential, as it allows our agents to learn the true probability: we prove that the beliefs obtained by repeated sampling converge almost surely to the correct belief (in the true probability).
    We end by sketching the contours of a dynamic doxastic logic for statistical learning.
    Comment: In Proceedings TARK 2019, arXiv:1907.0833
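
    The type-(1) learning can be sketched with a toy version of the 'plausibilistic' Bayes rule over a finite set of candidate biases (illustrative numbers; the paper works with general sets of probability measures):

```python
import random

def plausibilistic_bayes(plaus, hypotheses, red_drawn):
    """Rescale each hypothesis's plausibility by the likelihood it assigns
    to the new sample; the hypothesis set itself stays fixed, as in the
    paper's type-(1) learning."""
    likelihood = [h if red_drawn else (1 - h) for h in hypotheses]
    updated = [p * l for p, l in zip(plaus, likelihood)]
    top = max(updated)
    return [p / top for p in updated]  # rescale so the best has value 1

hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]  # candidate chances of drawing red
plaus = [1.0] * len(hypotheses)         # initially all equally plausible

rng = random.Random(1)
true_chance = 0.7
for _ in range(500):                    # repeated sampling from the bag
    plaus = plausibilistic_bayes(plaus, hypotheses,
                                 rng.random() < true_chance)

# Belief goes by maximal plausibility; sampling singles out the true bias.
most_plausible = hypotheses[plaus.index(max(plaus))]
print(most_plausible)
```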

    Learning from Conditionals

    In this article, we address a major outstanding question of probabilistic Bayesian epistemology: how should a rational Bayesian agent update their beliefs upon learning an indicative conditional? A number of authors have recently contended that this question is fundamentally underdetermined by Bayesian norms, and hence that there is no single update procedure that rational agents are obliged to follow upon learning an indicative conditional. Here, we resist this trend and argue that a core set of widely accepted Bayesian norms is sufficient to identify a normatively privileged updating procedure for this kind of learning. Along the way, we justify a privileged formalisation of the notion of 'epistemic conservativity', offer a new analysis of the Judy Benjamin problem, and emphasise the distinction between interpreting the content of new evidence and updating one's beliefs on the basis of that content.
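
    For reference, the standard relative-entropy treatment of the Judy Benjamin problem runs as follows (a textbook sketch, not necessarily the updating procedure the article defends): the prior is uniform over the cells (red & headquarters), (red & second company), (blue), and the message fixes P(second | red) = 3/4. Minimizing KL divergence to the prior yields the much-discussed result that P(blue) moves away from its prior value of 1/3:

```python
import math

def kl(q, p):
    """Kullback-Leibler divergence D(q || p) for discrete distributions."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

prior = [1/3, 1/3, 1/3]

def posterior(b):
    """Posteriors satisfying P(second | red) = 3/4, indexed by b = P(blue)."""
    return [(1 - b) / 4, 3 * (1 - b) / 4, b]

# Grid-search for the constrained posterior closest to the prior in KL.
best_b = min((i / 10000 for i in range(1, 10000)),
             key=lambda b: kl(posterior(b), prior))
print(round(best_b, 3))  # P(blue) ends up above the prior value 1/3
```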

    Learning from Conditionals

    In this article, we address a major outstanding question of probabilistic Bayesian epistemology: how should a rational Bayesian agent update their beliefs upon learning an indicative conditional? A number of authors have recently contended that this question is fundamentally underdetermined by Bayesian norms, and hence that there is no single update procedure that rational agents are obliged to follow upon learning an indicative conditional. Here, we resist this trend and argue that a core set of widely accepted Bayesian norms is sufficient to uniquely identify a single rational updating procedure for this kind of learning. Along the way, we justify a privileged formalisation of the notion of 'epistemic conservativity', offer a new analysis of the Judy Benjamin problem, and emphasise the distinction between interpreting the content of new evidence and updating one's beliefs on the basis of that content.
